%0 Conference Proceedings
%4 sid.inpe.br/sibgrapi/2021/09.03.21.30
%2 sid.inpe.br/sibgrapi/2021/09.03.21.30.51
%@doi 10.1109/SIBGRAPI54419.2021.00036
%T Data Augmentation Guidelines for Cross-Dataset Transfer Learning and Pseudo Labeling
%D 2021
%A Santos, Fernando Pereira dos,
%A Thumé, Gabriela Salvador,
%A Ponti, Moacir Antonelli,
%@affiliation Universidade de São Paulo
%@affiliation Universidade de São Paulo
%@affiliation Universidade de São Paulo
%E Paiva, Afonso,
%E Menotti, David,
%E Baranoski, Gladimir V. G.,
%E Proença, Hugo Pedro,
%E Junior, Antonio Lopes Apolinario,
%E Papa, João Paulo,
%E Pagliosa, Paulo,
%E dos Santos, Thiago Oliveira,
%E e Sá, Asla Medeiros,
%E da Silveira, Thiago Lopes Trugillo,
%E Brazil, Emilio Vital,
%E Ponti, Moacir A.,
%E Fernandes, Leandro A. F.,
%E Avila, Sandra,
%B Conference on Graphics, Patterns and Images, 34 (SIBGRAPI)
%C Gramado, RS, Brazil (virtual)
%8 18-22 Oct. 2021
%I IEEE Computer Society
%J Los Alamitos
%S Proceedings
%K transfer learning, deep learning, data augmentation
%X Convolutional Neural Networks require large amounts of labeled data to be trained. A widely used practical approach to improve performance is to augment the training set by generating compatible data. Standard data augmentation for images includes conventional techniques such as rotation, shift, and flip. In this paper, we go beyond such methods by studying alternative augmentation procedures for cross-dataset scenarios, in which a source dataset is used for training and a target dataset is used for testing. Through an extensive analysis considering different paradigms, saturation, and combination procedures, we provide guidelines for using augmentation methods in transfer learning scenarios. As a novel approach for self-supervised learning, we also propose using data augmentation techniques as pseudo labels during training. Our techniques prove to be robust alternatives across different transfer learning domains, including scenarios that benefit self-supervised learning.
%@language en
%3 paper112.pdf